Score level timbre transformations of violin sounds
The ability of a sound synthesizer to provide realistic sounds depends to a great extent on the availability of expressive controls. One of the most important expressive features a user of a synthesizer would want to control is timbre. Timbre is a complex concept related to many musical indications in a score, such as dynamics, accents, hand position, string played, or even indications referring to timbre itself. Musical indications are in turn related to low-level performance controls such as bow velocity or bow force. With the help of a data acquisition system able to record sound synchronized to performance controls and aligned to the performed score, and by means of statistical analysis, we are able to model the interrelations among sound (timbre), controls, and musical score indications. In this paper we present a procedure for score-controlled timbre transformations of violin sounds within a sample-based synthesizer. Given a sound sample and its trajectory of performance controls: (1) the controls trajectory is transformed according to the score indications; (2) a new timbre corresponding to the transformed trajectory is predicted by means of a timbre model that relates timbre to performance controls; and (3) the timbre of the original sound is transformed by applying a time-varying filter, calculated frame by frame as the difference between the original and predicted envelopes.
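The last step, frame-by-frame filtering by the envelope difference, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the spectral envelopes (original and model-predicted) are taken as given, and the filter is applied as a bin-wise gain on STFT magnitude frames, which corresponds to the envelope difference in dB.

```python
import numpy as np

def transform_timbre(mag_frames, orig_env, pred_env, eps=1e-9):
    """Apply a time-varying filter to STFT magnitude frames.

    Each frame is scaled bin by bin by the ratio of the predicted
    spectral envelope to the original one, i.e. the difference of
    the two envelopes on a dB scale.
    """
    gain = (pred_env + eps) / (orig_env + eps)
    return mag_frames * gain

# Toy example: 4 frames, 8 frequency bins.
rng = np.random.default_rng(0)
mag = rng.random((4, 8))
orig = np.ones((4, 8))          # flat original envelope
pred = 2.0 * np.ones((4, 8))    # predicted timbre ~6 dB stronger
out = transform_timbre(mag, orig, pred)
```

In practice the envelopes would be estimated per analysis frame (and the predicted one produced by the timbre model); the resynthesis stage then reuses the original phases.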
Augmenting Sound Mosaicing with Descriptor-Driven Transformation
We propose a strategy for integrating descriptor-driven transformation into mosaicing sound synthesis, in which samples are selected by taking into account potential distances in the transformed space. Target descriptors consisting of chroma, mel-spaced filter banks, and energy are modeled with respect to windowed bandlimited resampling and mel-spaced filters, and later corrected with gain. These transformations, however simple, allow some adaptation of textural sound material to musical contexts.
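The core idea of selecting samples by their potential distance in the transformed space can be illustrated with the gain correction alone. This is a hypothetical sketch, not the paper's implementation: each corpus descriptor is allowed its best least-squares scalar gain before the distance to the target is measured.

```python
import numpy as np

def gain_corrected_distance(target, sample, eps=1e-12):
    """Distance between a target descriptor and a corpus descriptor
    after applying the best least-squares scalar gain to the sample."""
    g = np.dot(target, sample) / (np.dot(sample, sample) + eps)
    return float(np.linalg.norm(target - g * sample))

def select_unit(target, corpus):
    """Index of the corpus unit closest to the target in the
    gain-transformed descriptor space."""
    return int(np.argmin([gain_corrected_distance(target, s) for s in corpus]))

target = np.array([2.0, 4.0, 6.0])
corpus = [np.array([5.0, 1.0, 2.0]), np.array([1.0, 2.0, 3.0])]
best = select_unit(target, corpus)   # the second unit is a scaled target
```

The same pattern extends to the other transformations mentioned (resampling, mel-spaced filtering): minimize the post-transformation distance over the transformation's parameters rather than comparing raw descriptors.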
Mapping blowing pressure and sound features in recorder playing
This paper presents a data-driven approach to constructing mapping models that relate sound features and blowing pressure in recorder playing. Blowing pressure and sound feature data are synchronously obtained from real performance: blowing pressure is measured by means of a piezoelectric transducer inserted into the mouthpiece of a modified recorder, while the produced sound is acquired using a close-field microphone. The acquired sound is analyzed frame by frame, and features are extracted so that the original sound can be reconstructed with sufficient fidelity. A multi-modal database of aligned blowing pressure and sound feature signals is constructed from real performance recordings designed to cover basic performance contexts. From the gathered data, two types of mapping models are constructed using artificial neural networks: (i) a model able to generate sound feature signals from blowing pressure signals, used to produce synthetic sound from recorded blowing pressure profiles via additive synthesis; and (ii) a model able to estimate blowing pressure from extracted sound features.
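The forward mapping (i) can be sketched as a small feed-forward network. Everything here is a placeholder: the layer sizes, the window length, and the weights, which in the paper would come from training on the aligned pressure/feature database.

```python
import numpy as np

def pressure_to_features(pressure_window, W1, b1, W2, b2):
    """One-hidden-layer network mapping a window of blowing-pressure
    samples to one frame of sound features (e.g. harmonic amplitudes
    for additive resynthesis)."""
    h = np.tanh(pressure_window @ W1 + b1)   # hidden activations
    return h @ W2 + b2                       # linear output layer

# Hypothetical shapes: 16 pressure samples in, 12 hidden units, 8 features out.
rng = np.random.default_rng(1)
W1, b1 = 0.1 * rng.normal(size=(16, 12)), np.zeros(12)
W2, b2 = 0.1 * rng.normal(size=(12, 8)), np.zeros(8)
features = pressure_to_features(np.zeros(16), W1, b1, W2, b2)
```

The inverse model (ii) has the same structure with inputs and outputs swapped: a window of extracted sound features in, an estimated blowing-pressure value out.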
Synchronization of intonation adjustments in violin duets: towards an objective evaluation of musical interaction
In ensemble music performance, such as a string quartet or duet, the musicians interact and influence each other's playing via a multitude of parameters, including tempo, dynamics, articulation of musical phrases and, depending on the type of instrument, intonation. This paper presents our ongoing research on the effect of interaction between violinists in terms of intonation. We base our analysis on a series of experiments with professional as well as amateur musicians playing in duet and solo experimental set-ups, and then apply a series of interdependence measures to each violinist's pitch deviations from the score. Our results show that while it is possible to distinguish between solo and duet performances solely on the basis of intonation in simple cases, there is a multitude of underlying factors that must be analyzed before these techniques can be applied to more complex pieces and/or non-experimental situations.
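One simple interdependence measure of the kind described, peak lagged cross-correlation between two players' pitch-deviation series, could look like this. This is a sketch of one candidate measure, not the paper's full battery of measures; series lengths, lag range, and normalization are illustrative choices.

```python
import numpy as np

def peak_lagged_correlation(dev_a, dev_b, max_lag=10):
    """Peak absolute normalized cross-correlation between two
    pitch-deviation series over a small range of lags."""
    a = (dev_a - dev_a.mean()) / (dev_a.std() + 1e-12)
    b = (dev_b - dev_b.mean()) / (dev_b.std() + 1e-12)
    n = len(a)
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = np.dot(a[lag:], b[:n - lag]) / n
        else:
            c = np.dot(a[:n + lag], b[-lag:]) / n
        best = max(best, abs(float(c)))
    return best

rng = np.random.default_rng(2)
dev = rng.normal(size=1000)
same = peak_lagged_correlation(dev, dev)   # perfectly coupled series
```

Values near 1 indicate strongly coupled deviations (as when one player tracks the other's intonation adjustments with a short delay); independent series give values near 0.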
Constrained Pole Optimization for Modal Reverberation
The problem of designing a modal reverberator to match a measured room impulse response is considered. The modal reverberator architecture expresses a room impulse response as a parallel combination of resonant filters, with the pole locations determined by the room resonances and decay rates, and the zeros by the source and listener positions. Our method first estimates the pole positions in a frequency-domain process involving a series of constrained pole position optimizations in overlapping frequency bands. With the pole locations in hand, the zeros are fit to the measured impulse response using least squares. Example optimizations for a medium-sized room show a good match between the measured and modeled room responses.
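The second stage, fitting the zeros once the poles are fixed, becomes a linear least-squares problem because the impulse response is linear in the mode gains. A minimal real-valued sketch, assuming mode frequencies (Hz) and decay rates (1/s) have already been produced by the pole-optimization stage:

```python
import numpy as np

def fit_mode_gains(ir, freqs_hz, decays, sr):
    """With pole positions fixed (mode frequencies in Hz, decay rates
    in 1/s), fit mode amplitudes to a measured impulse response by
    linear least squares."""
    t = np.arange(len(ir)) / sr
    cols = []
    for f, d in zip(freqs_hz, decays):
        env = np.exp(-d * t)                      # modal decay envelope
        cols.append(env * np.cos(2 * np.pi * f * t))
        cols.append(env * np.sin(2 * np.pi * f * t))
    A = np.stack(cols, axis=1)                    # one cos/sin pair per mode
    gains, *_ = np.linalg.lstsq(A, ir, rcond=None)
    return gains, A @ gains                       # gains and modeled IR

# Synthetic check: build an IR from two known modes, then refit it.
sr = 8000
t = np.arange(1024) / sr
ir = (np.exp(-6.0 * t) * np.cos(2 * np.pi * 440.0 * t)
      + 0.5 * np.exp(-9.0 * t) * np.sin(2 * np.pi * 660.0 * t))
gains, model = fit_mode_gains(ir, [440.0, 660.0], [6.0, 9.0], sr)
```

For a real room response the basis would contain hundreds of modes per band, which is why the paper's pole estimation works in overlapping frequency bands before this global gain fit.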
Joint modeling of impedance and radiation as a recursive parallel filter structure for efficient synthesis of wind instrument sound
In the context of efficient synthesis of wind instrument sound, we introduce a technique for joint modeling of input impedance and sound pressure radiation as digital filters in parallel form, with the filter coefficients derived from experimental data. In a series of laboratory measurements taken on an alto saxophone, the input impedance and sound pressure radiation responses were obtained for each fingering. In a first analysis step, we iteratively minimize the error between the frequency response of an input impedance measurement and that of a digital impedance model constructed from a parallel filter structure akin to the discretization of a modal expansion. With the modal coefficients in hand, we propose a digital model for sound pressure radiation that relies on the same parallel structure, making it suitable for coefficient estimation via frequency-domain least squares. To model the transition between fingering positions, we propose a simple model based on linear interpolation of the input impedance and sound pressure radiation models. For efficient sound synthesis, the common impedance-radiation model is used to construct a joint reflectance-radiation digital filter, realized as a digital waveguide termination interfaced to a reed model based on nonlinear scattering.
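Because both fingerings share the same parallel filter structure, the proposed transition model reduces to interpolating coefficient vectors. A minimal sketch, with hypothetical coefficient vectors standing in for the measured impedance/radiation models:

```python
import numpy as np

def interpolate_fingering(coeffs_a, coeffs_b, alpha):
    """Linearly interpolate parallel-filter coefficients between two
    fingerings: alpha = 0 gives fingering A, alpha = 1 fingering B."""
    return (1.0 - alpha) * coeffs_a + alpha * coeffs_b

# Hypothetical modal coefficients for two fingerings.
a = np.array([1.0, 0.2, -0.5])
b = np.array([0.6, 0.4, -0.1])
mid = interpolate_fingering(a, b, 0.5)   # halfway through the transition
```

Sweeping alpha from 0 to 1 over a short time window gives a smooth cross-fade between the two fingering models during note transitions.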